
Add a FailoverChannel wrapper on top of IsolationChannel to maintain a set of primary and failover channels. #37840

Open
parveensania wants to merge 8 commits into apache:master from parveensania:failover-channel

Conversation

@parveensania
Contributor

Adds a FailoverChannel wrapper class on top of IsolationChannels that maintains a primary channel and a failover channel, falling back to the failover channel if connectivity over the primary channel cannot be established. The primary channel is retried again after a period.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the robustness of gRPC communication within Dataflow workers by implementing a failover mechanism for channels. It allows the system to gracefully handle primary channel connectivity issues by switching to a fallback channel and periodically attempting to restore the primary connection, thereby enhancing the overall stability and reliability of the worker's interaction with the Windmill service.

Highlights

  • New FailoverChannel Implementation: Introduced a new FailoverChannel class that wraps a primary and a fallback gRPC ManagedChannel, enabling automatic failover to the fallback channel if the primary becomes unavailable.
  • Resilient Channel Management: The FailoverChannel includes logic to retry the primary channel after a configurable cooling period (1 hour), enhancing the resilience of gRPC communication.
  • Integration into StreamingDataflowWorker: Modified StreamingDataflowWorker to conditionally create and use the FailoverChannel within the ChannelCache if isolated channels are enabled, leveraging dispatcher endpoints for fallback.
  • gRPC Request Enhancements: Updated the GetWorkRequest proto definition to include computation_id_filter and backend_worker_token fields, and integrated the backend_worker_token into the FanOutStreamingEngineWorkerHarness.
  • Testing and Utilities: Added comprehensive unit tests for FailoverChannel and refactored common test utility classes like NoopClientCall into a dedicated file for better organization.
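The failover-with-cooling-period behavior described in the highlights can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: the class and method names are invented, and time is passed in explicitly rather than read from a clock so the logic is easy to follow.

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the failover decision with a cooling period.
// Names are illustrative; the real FailoverChannel wraps ManagedChannels.
final class FailoverState {
    private final long coolingPeriodNanos;
    private boolean onFallback = false;
    private long failoverAtNanos = 0;

    FailoverState(long coolingPeriodHours) {
        this.coolingPeriodNanos = TimeUnit.HOURS.toNanos(coolingPeriodHours);
    }

    // Record a primary-channel failure and switch to the fallback channel.
    synchronized void notePrimaryFailure(long nowNanos) {
        if (!onFallback) {
            onFallback = true;
            failoverAtNanos = nowNanos;
        }
    }

    // True if the primary channel should be (re)tried: either we never
    // failed over, or the cooling period since failover has elapsed.
    synchronized boolean shouldUsePrimary(long nowNanos) {
        return !onFallback || nowNanos - failoverAtNanos >= coolingPeriodNanos;
    }
}
```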


Changelog
  • runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/StreamingDataflowWorker.java
    • Imported new classes for failover channel and credentials.
    • Modified createChannelCache to accept GrpcDispatcherClient and to construct a FailoverChannel when isolated channels are enabled, using primary and fallback IsolationChannel instances.
  • runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/streaming/harness/FanOutStreamingEngineWorkerHarness.java
    • Updated createAndStartWindmillStreamSender to include backendWorkerToken in the GetWorkRequest if available from the endpoint.
  • runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/windmill/client/grpc/GrpcDispatcherClient.java
    • Changed the visibility of getDispatcherEndpoints() method from package-private to public.
  • runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/windmill/client/grpc/stubs/FailoverChannel.java
    • Added a new FailoverChannel class that implements ManagedChannel and provides failover logic between a primary and a fallback channel, with a cooling period for retrying the primary.
  • runners/google-cloud-dataflow-java/worker/src/test/java/org/apache/beam/runners/dataflow/worker/streaming/harness/FanOutStreamingEngineWorkerHarnessTest.java
    • Modified getWorkRequest helper to accept a backendWorkerToken parameter.
    • Updated testStreamsStartCorrectly to verify createDirectGetWorkStream calls with specific worker tokens.
  • runners/google-cloud-dataflow-java/worker/src/test/java/org/apache/beam/runners/dataflow/worker/windmill/client/grpc/stubs/FailoverChannelTest.java
    • Added a new test class FailoverChannelTest to verify the failover and retry mechanisms of the FailoverChannel, including credential handling.
  • runners/google-cloud-dataflow-java/worker/src/test/java/org/apache/beam/runners/dataflow/worker/windmill/client/grpc/stubs/IsolationChannelTest.java
    • Removed the NoopClientCall and NoopClientCallListener inner classes, as they were moved to a dedicated file.
    • Changed NoopMarshaller to be public.
  • runners/google-cloud-dataflow-java/worker/src/test/java/org/apache/beam/runners/dataflow/worker/windmill/client/grpc/stubs/NoopClientCall.java
    • Added a new NoopClientCall class and its inner NoopClientCallListener for use in gRPC client testing.
  • runners/google-cloud-dataflow-java/worker/windmill/src/main/proto/windmill.proto
    • Added computation_id_filter and backend_worker_token fields to the GetWorkRequest message.

@parveensania
Contributor Author

R: @arunpandianp

@parveensania
Contributor Author

R: @scwhittle

@github-actions
Contributor

Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment assign set of reviewers

@parveensania
Contributor Author

assign set of reviewers

@github-actions
Contributor

Assigning reviewers:

R: @Abacn added as fallback since no labels match configuration

Note: If you would like to opt out of this review, comment assign to next reviewer.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@parveensania
Contributor Author

stop reviewer notifications

@parveensania
Contributor Author

R: @arunpandianp @scwhittle

@github-actions
Contributor

Stopping reviewer notifications for this pull request: requested by reviewer. If you'd like to restart, comment assign set of reviewers

@github-actions
Contributor

Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment assign set of reviewers

@arunpandianp
Contributor

When looking for similar implementations, I came across GcpMultiEndpointChannel https://github.com/GoogleCloudPlatform/grpc-gcp-java/blob/master/grpc-gcp/src/main/java/com/google/cloud/grpc/GcpMultiEndpointChannel.java

GcpMultiEndpointChannel uses the channel ConnectivityStatus to determine which channel to use. Would it be more robust if FailoverChannel used ConnectivityStatus instead of RPC status to fail over?

Thinking of something like: wait for the primary to become ready the first time, and fail over to the fallback channel if it takes more than X seconds. We can let the primary retry connections in the background and switch back to it whenever it becomes ready.

@parveensania
Contributor Author

When looking for similar implementations, I came across GcpMultiEndpointChannel https://github.com/GoogleCloudPlatform/grpc-gcp-java/blob/master/grpc-gcp/src/main/java/com/google/cloud/grpc/GcpMultiEndpointChannel.java

GcpMultiEndpointChannel uses the channel ConnectivityStatus to determine which channel to use. Would it be more robust if FailoverChannel used ConnectivityStatus instead of RPC status to fail over?

Thinking of something like: wait for the primary to become ready the first time, and fail over to the fallback channel if it takes more than X seconds. We can let the primary retry connections in the background and switch back to it whenever it becomes ready.

I went for a hybrid approach: check both connection state and RPC status. ConnectionState errors could be transient, so we move back to the primary as soon as the state changes to READY. RPC status can also capture server-side issues, like the backend not responding (for instance, requests getting rejected by security policies; there could be other reasons too). For this I've used a longer cooling period before we retry the primary. WDYT?

currentFlowControlSettings),
currentFlowControlSettings.getOnReadyThresholdBytes());
ManagedChannel primaryChannel =
IsolationChannel.create(
Contributor


Since it's being set up this way, IsolationChannel connectivity callbacks are going to be what is used. I'm not sure how that will work since it internally has multiple channels. Looking at it, it seems it just has the default ManagedChannel implementation, which throws an unimplemented exception.

What about having IsolationChannel on top of the fallback channels? That seems simpler to me since IsolationChannel just internally creates the separate channels and otherwise doesn't do much more than forward things on.

It would be good to have a unit test of whatever setup we do use so that we flush out the issues there instead of requiring an integration test.

Contributor Author


Addressed this. IsolationChannel now wraps FailoverChannel which creates two channels per active RPC.

The original intent was to keep IsolationChannel unmodified (since it is used by the dispatcher client) and handle fallback at the per-worker level. The new ordering (IsolationChannel over FailoverChannel) changes the semantics to per-RPC failover, which means that in case of connectivity issues, each RPC independently discovers the failure and switches at a different time, rather than all switching together in a coordinated way.

I do agree that managing state at the per-RPC level seems less error prone, but I would like to call out this semantic change.

Contributor

@scwhittle scwhittle left a comment


Just a couple more comments, thanks!

* When toFallback is false (primary recovered) it clears all fallback flags and returns true if
* recovery actually changed state, so the caller can log it.
*/
synchronized boolean transitionFallback(boolean toFallback, long nowNanos) {
Contributor


Nit: how about two separate methods? The bool just forks internally.

I think you could then name them to reflect where they are called:
markPrimaryReady()
notePrimaryRpcFailure()

Contributor


Oops, dropped part of that comment. I think we may also want to only transition to fallback after X continuous seconds of RPC failures without any responses. If the method is notePrimaryRpcStatus(bool success), you can keep track of the time and then only fall back once the most recent failure is X seconds past the first continuously observed one.

Since we expect requiring fallback to be very rare, it seems like we should be cautious about enabling it.
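The failure-streak suggestion above could be sketched along these lines. This is a hypothetical illustration, not the PR's code: the class name is invented, and the timestamp is passed in explicitly so the streak logic is testable in isolation.

```java
// Hypothetical sketch: track primary RPC outcomes and only signal failover
// after a continuous window of failures with no successes in between.
final class PrimaryHealthTracker {
    private final long failureWindowNanos;
    private long firstFailureNanos = -1; // -1 => no ongoing failure streak

    PrimaryHealthTracker(long failureWindowNanos) {
        this.failureWindowNanos = failureWindowNanos;
    }

    // Returns true if the caller should now fail over to the fallback channel.
    synchronized boolean notePrimaryRpcStatus(boolean success, long nowNanos) {
        if (success) {
            firstFailureNanos = -1; // any success resets the streak
            return false;
        }
        if (firstFailureNanos < 0) {
            firstFailureNanos = nowNanos; // streak starts at this failure
        }
        return nowNanos - firstFailureNanos >= failureWindowNanos;
    }
}
```

A single success anywhere in the window resets the streak, which matches the "without any responses" caveat in the comment above.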

serviceAddress,
workerOptions.getWindmillServiceRpcChannelAliveTimeoutSec(),
currentFlowControlSettings),
remoteChannel(
Contributor


Maybe the fallback parameter should be a Supplier that FailoverChannel will call at most once, but which could internally defer the call until we actually want to fail over (we can wrap the provided supplier with Suppliers.memoize). Then we could avoid creating a channel to the dispatcher if we never fall back.
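The memoized-Supplier idea can be sketched with the JDK alone. This stands in for Guava's Suppliers.memoize, and String stands in for ManagedChannel, purely to keep the example self-contained; the real fallback supplier would build a dispatcher channel.

```java
import java.util.function.Supplier;

// Sketch of the lazy-fallback idea: the expensive value (a channel, in the
// PR) is built on first get() and at most once, never if get() is not called.
final class Lazy<T> implements Supplier<T> {
    private final Supplier<T> delegate;
    private volatile T value; // non-null once computed

    Lazy(Supplier<T> delegate) { this.delegate = delegate; }

    @Override
    public synchronized T get() {
        if (value == null) {
            value = delegate.get(); // invoked at most once
        }
        return value;
    }
}
```

FailoverChannel would hold the Lazy and only call get() at the moment it decides to fail over, so a worker that never fails over never opens the dispatcher channel.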

